Learning generalizable policies that can adapt to unseen environments remains challenging in visual Reinforcement Learning (RL). Existing approaches try to acquire a robust representation by diversifying the appearances of in-domain observations for better generalization. Limited by the specific observations of the environment, these methods ignore the possibility of exploring diverse real-world image datasets. In this paper, we investigate how a visual RL agent would benefit from off-the-shelf visual representations. Surprisingly, we find that the early layers of an ImageNet pre-trained ResNet model can provide rather generalizable representations for visual RL. Hence, we propose Pre-trained Image Encoder for Generalizable visual reinforcement learning (PIE-G), a simple yet effective framework that can generalize to unseen visual scenarios in a zero-shot manner. Extensive experiments are conducted on the DMControl Generalization Benchmark, DMControl Manipulation Tasks, Drawer World, and CARLA to verify the effectiveness of PIE-G. Empirical evidence suggests that PIE-G improves sample efficiency and significantly outperforms previous state-of-the-art methods in terms of generalization performance. In particular, PIE-G boasts a 55% generalization performance gain on average in the challenging video background setting. Project Page: https://sites.google.com/view/pie-g/home.
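To make the idea concrete, here is a minimal sketch of a frozen early-layer ResNet encoder of the kind the abstract describes; the cut-off stage, weight choice, and flattened output below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class FrozenEarlyResNetEncoder(nn.Module):
    """Embeds image observations with the frozen early layers of an
    ImageNet pre-trained ResNet-18."""

    def __init__(self, num_stages: int = 2):
        super().__init__()
        resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        stem = [resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool]
        stages = [resnet.layer1, resnet.layer2, resnet.layer3][:num_stages]
        self.features = nn.Sequential(*stem, *stages)
        for p in self.features.parameters():
            p.requires_grad = False              # the encoder stays frozen during RL training

    @torch.no_grad()
    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (B, 3, H, W) normalized image observations.
        return self.features(obs).flatten(start_dim=1)   # flat embedding for the policy/critic
```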
Learning a risk-aware policy is essential but rather challenging in unstructured robotic tasks. Safe reinforcement learning methods open up new possibilities for tackling this problem. However, conservative policy updates make it intractable to achieve sufficient exploration and desirable performance in complex, sample-expensive environments. In this paper, we propose a dual-agent safe reinforcement learning strategy consisting of a baseline agent and a safe agent. Such a decoupled framework enables high flexibility, data efficiency, and risk-awareness for RL-based control. Concretely, the baseline agent is responsible for maximizing rewards under standard RL settings and is therefore compatible with off-the-shelf training techniques for unconstrained optimization, exploration, and exploitation. The safe agent, on the other hand, mimics the baseline agent for policy improvement and learns to fulfill safety constraints via off-policy RL tuning. In contrast to training from scratch, safe policy correction requires significantly fewer interactions to obtain a near-optimal policy. The dual policies can be optimized synchronously via a shared replay buffer, or by leveraging a pre-trained model or a non-learning-based controller as a fixed baseline agent. Experimental results show that our approach can learn feasible skills without prior knowledge as well as derive risk-averse counterparts of pre-trained unsafe policies. The proposed method outperforms state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks with respect to both safety constraint satisfaction and sample efficiency.
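As a rough illustration of the dual-agent idea, one conceivable objective for the safe agent blends imitation of the baseline policy with a learned cost critic and a reward critic; the function names, penalty form, and loss weights below are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def safe_agent_loss(safe_actor, baseline_actor, reward_critic, cost_critic, obs,
                    cost_limit: float = 0.0, beta_imit: float = 1.0, beta_cost: float = 10.0):
    """One conceivable objective for the safe agent: follow the high-reward
    baseline policy while keeping the predicted constraint cost below a limit.
    All networks are passed in as callables; the weights are arbitrary."""
    a_safe = safe_actor(obs)
    with torch.no_grad():
        a_base = baseline_actor(obs)               # baseline trained with standard, unconstrained RL
    imitation = F.mse_loss(a_safe, a_base)         # mimic the baseline for policy improvement
    reward_term = -reward_critic(obs, a_safe).mean()
    cost_excess = F.relu(cost_critic(obs, a_safe) - cost_limit).mean()
    return reward_term + beta_imit * imitation + beta_cost * cost_excess
```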
Safety comes first in many real-world applications involving autonomous agents. Despite a large number of reinforcement learning (RL) methods focusing on safety-critical tasks, there is still a lack of high-quality evaluation of algorithms that adhere to safety constraints at each decision step under complex and unknown dynamics. In this paper, we revisit prior work in this scope from the perspective of state-wise safe RL and categorize it into projection-based, recovery-based, and optimization-based approaches. Furthermore, we propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection. This novel technique explicitly enforces hard constraints via a deep unrolling architecture and enjoys structural advantages in navigating the trade-off between reward improvement and constraint satisfaction. To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit, a toolkit that provides off-the-shelf interfaces and evaluation utilities for safety-critical tasks. We then perform a comparative study of the involved algorithms on six benchmarks ranging from robotic control to autonomous driving. The empirical results provide insight into their applicability and robustness in learning zero-cost-return policies without task-dependent handcrafting. The project page is available at https://sites.google.com/view/saferlkit.
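Purely to illustrate what an unrolled, optimization-style safety layer can look like, the sketch below takes a few differentiable gradient steps on a proposed action against a learned cost critic; it is a generic stand-in for the idea, not the USL architecture from the paper.

```python
import torch

def unrolled_safety_correction(action: torch.Tensor, state: torch.Tensor,
                               cost_critic, n_iters: int = 5, eta: float = 0.1):
    """Correct a proposed action with a few unrolled gradient steps that push
    the predicted constraint cost toward zero (a generic optimization-style
    safety layer, not the exact USL architecture)."""
    a = action.clone().detach().requires_grad_(True)
    for _ in range(n_iters):
        cost = cost_critic(state, a).sum()        # predicted cumulative constraint cost
        grad, = torch.autograd.grad(cost, a)
        a = (a - eta * grad).detach().requires_grad_(True)
    return a.detach()
```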
The accurate detection and grasping of transparent objects are challenging but significant for robots. Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and variant light conditions is proposed, comprising grasping position detection, tactile calibration, and visual-tactile fusion based classification. First, a multi-scene synthetic grasping dataset generation method with Gaussian distribution based data annotation is proposed. In addition, a novel grasping network named TGCNN is proposed for grasping position detection, showing good results in both synthetic and real scenes. For tactile calibration, inspired by human grasping, a fully convolutional network based tactile feature extraction method and a central location based adaptive grasping strategy are designed, improving the success rate by 36.7% compared to direct grasping. Furthermore, a visual-tactile fusion method is proposed for transparent object classification, which improves the classification accuracy by 34%. The proposed framework synergizes the advantages of vision and touch, and greatly improves the grasping efficiency of transparent objects.
Humans balance very well during walking, even when perturbed, yet robust walking remains difficult to achieve for bipedal robots. Here we describe the simplest balance controller that leads to robust walking for a linear inverted pendulum (LIP) model. The main idea is to use a linear function of the body velocity to determine the next foot placement, which we call linear foot placement control (LFPC). Using the Poincaré map, a balance criterion is derived, which shows that LFPC is stable when the velocity-feedback coefficient lies within a certain range. That range becomes much larger when stepping faster, indicating that faster stepping makes balancing easier. We show that various gaits can be generated by adjusting the controller parameters in LFPC. In particular, a dead-beat controller is discovered that can reach steady-state walking in just one step. The effectiveness of LFPC is verified through MATLAB simulation as well as V-REP simulation for both 2D and 3D walking. The main feature of LFPC is its simplicity and inherent robustness, which may help us understand the essence of how balance is maintained in dynamic walking.
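The core rule, placing the next foot at a distance that is a linear function of the touchdown velocity, can be reproduced in a few lines with the closed-form LIP step-to-step map; the parametrization, gains, step time, and pendulum height below are hand-picked for illustration and are not taken from the paper.

```python
import numpy as np

def lip_walk(k_v: float, offset: float, v0: float, n_steps: int = 20,
             z0: float = 1.0, g: float = 9.81, step_time: float = 0.5):
    """Step-to-step simulation of a linear inverted pendulum (LIP) whose next
    foot is placed (k_v * v + offset) ahead of the body at touchdown."""
    omega = np.sqrt(g / z0)
    c, s = np.cosh(omega * step_time), np.sinh(omega * step_time)
    # State at the start of a step: CoM position relative to the stance foot, CoM velocity.
    xi, v = -(k_v * v0 + offset), v0
    touchdown_velocities = []
    for _ in range(n_steps):
        # Touchdown velocity after one step of the closed-form LIP dynamics.
        v_end = xi * omega * s + v * c
        touchdown_velocities.append(v_end)
        # Linear foot placement: the new stance foot lands (k_v * v_end + offset)
        # ahead of the CoM, so the CoM starts the next step that far behind it.
        xi, v = -(k_v * v_end + offset), v_end
    return np.array(touchdown_velocities)

# For this toy parametrization the velocity map is v' = (c - k_v*omega*s)*v - offset*omega*s,
# so it is stable when |c - k_v*omega*s| < 1, and the admissible k_v range widens as
# step_time shrinks (faster stepping).
print(lip_walk(k_v=0.3, offset=-0.05, v0=0.8))
```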
Visual reinforcement learning (RL), which makes decisions directly from high-dimensional visual inputs, has demonstrated significant potential in various domains. However, deploying visual RL techniques in the real world remains challenging due to their low sample efficiency and large generalization gaps. To tackle these obstacles, data augmentation (DA) has become a widely used technique in visual RL for acquiring sample-efficient and generalizable policies by diversifying the training data. This survey aims to provide a timely and essential review of DA techniques in visual RL in recognition of the thriving development in this field. In particular, we propose a unified framework for analyzing visual RL and understanding the role of DA in it. We then present a principled taxonomy of the existing augmentation techniques used in visual RL and conduct an in-depth discussion on how to better leverage augmented data in different scenarios. Moreover, we report a systematic empirical evaluation of DA-based techniques in visual RL and conclude by highlighting the directions for future research. As the first comprehensive survey of DA in visual RL, this work is expected to offer valuable guidance to this emerging field.
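For concreteness, one of the most common augmentations in this literature is the pad-and-random-crop ("random shift") transform; a minimal sketch is given below, with the padding size and observation shape chosen arbitrarily.

```python
import torch
import torch.nn.functional as F

def random_shift(imgs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Pad-and-random-crop ("random shift") augmentation on a batch of
    image observations of shape (B, C, H, W)."""
    b, _, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(b):
        top = int(torch.randint(0, 2 * pad + 1, (1,)))
        left = int(torch.randint(0, 2 * pad + 1, (1,)))
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

# Typical usage: augment each sampled minibatch before the actor/critic update.
obs = torch.rand(8, 9, 84, 84)   # e.g., three stacked 84x84 RGB frames
aug_obs = random_shift(obs)
```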
Can a robot manipulate unseen objects within a category, in arbitrary poses, given only a demonstration of the grasping pose on a single object instance? In this paper, we try to address this intriguing challenge with USEEK, an unsupervised SE(3)-equivariant keypoint method that enjoys alignment across instances within a category, in order to perform generalizable manipulation. USEEK follows a teacher-student structure to decouple unsupervised keypoint discovery from SE(3)-equivariant keypoint detection. With USEEK, the robot can infer task-relevant object frames in an efficient and interpretable manner, enabling the manipulation of any intra-category object from any pose. Through extensive experiments, we demonstrate that the keypoints produced by USEEK possess rich semantics and thus successfully transfer functional knowledge from the demonstration object to novel objects. Compared with other object representations for manipulation, USEEK is more robust in the face of large intra-category shape variance, more effective with limited demonstrations, and more efficient at inference time.
This paper presents an efficient and safe method for LiDAR-based avoidance of static and dynamic obstacles. First, point clouds are used to generate a real-time local grid map for obstacle detection. Obstacles are then clustered by the DBSCAN algorithm and enclosed in minimum bounding ellipses (MBEs). In addition, data association is performed to match each MBE with the corresponding obstacle in the current frame. Taking the MBEs as observations, a Kalman filter (KF) is used to estimate and predict the motion states of the obstacles. In this way, the trajectory of each obstacle over the forward time horizon can be parameterized as a set of ellipses. Considering the uncertainty of the MBEs, the semi-major and semi-minor axes of the parameterized ellipses are extended to ensure safety. We extend the traditional Control Barrier Function (CBF) and propose the Dynamic Control Barrier Function (D-CBF). We combine the D-CBF with Model Predictive Control (MPC) to implement safety-critical avoidance of dynamic obstacles. Experiments in simulated and real-world scenarios are conducted to verify the effectiveness of our algorithm. The source code is released for the reference of the community.
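A small sketch of the front end of such a pipeline, assuming 2-D LiDAR points and approximating each minimum bounding ellipse with an inflated covariance ellipse; the paper's exact MBE fitting, data association, and D-CBF formulation are not reproduced here.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_obstacles(points_xy: np.ndarray, eps: float = 0.3, min_samples: int = 5):
    """Cluster 2-D LiDAR points with DBSCAN and wrap each cluster in an
    inflated covariance ellipse (a stand-in for a true minimum bounding ellipse)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy)
    obstacles = []
    for k in sorted(set(labels) - {-1}):         # label -1 marks DBSCAN noise points
        pts = points_xy[labels == k]
        center = pts.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
        # Inflate the axes (3-sigma plus a margin) before they are used as
        # observations for the Kalman filter and as safety constraints.
        axes = 3.0 * np.sqrt(np.maximum(eigvals, 1e-9)) + 0.1
        heading = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])   # major-axis orientation
        obstacles.append({"center": center, "axes": axes, "heading": heading})
    return obstacles
```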
A key challenge of continual reinforcement learning (CRL) in dynamic environments is to promptly adapt the RL agent's behavior as the environment changes over its lifetime, while minimizing catastrophic forgetting of the learned information. To address this challenge, in this paper we propose DaCoRL, i.e., dynamics-adaptive continual RL. DaCoRL learns a context-conditioned policy using progressive contextualization, which incrementally clusters a stream of stationary tasks in the dynamic environment into a series of contexts and opts for an expandable multi-head neural network to approximate the policy. Specifically, we define a context as a set of tasks with similar dynamics and formalize context inference as a procedure of online Bayesian infinite Gaussian mixture clustering on environment features, resorting to online Bayesian inference to infer the posterior distribution over contexts. Under the assumption of a Chinese restaurant process prior, this technique can accurately classify the current task as a previously seen context or instantiate a new context as needed, without relying on any external indicator to signal environmental changes in advance. In addition, we employ an expandable multi-head neural network whose output layer is expanded synchronously with newly instantiated contexts, together with a knowledge distillation regularization term to retain performance on learned tasks. As a general framework that can be coupled with various deep RL algorithms, DaCoRL features consistent superiority over existing methods in terms of stability, overall performance, and generalization ability, as verified by extensive experiments on several robot navigation and MuJoCo locomotion tasks.
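A toy sketch of the context-assignment step, assuming a Chinese-restaurant-process-style trade-off between existing Gaussian clusters and a new context; this is a simplified stand-in for the paper's full online Bayesian inference.

```python
import numpy as np

def assign_context(feature: np.ndarray, contexts: list,
                   alpha: float = 1.0, var: float = 1.0) -> int:
    """Toy CRP-style assignment: score each existing context by
    (cluster size x Gaussian likelihood of the environment feature) and a new
    context by (alpha x a broad prior), then take the argmax."""
    scores = []
    for ctx in contexts:
        lik = np.exp(-np.sum((feature - ctx["mean"]) ** 2) / (2.0 * var))
        scores.append(ctx["count"] * lik)
    scores.append(alpha * np.exp(-np.sum(feature ** 2) / (2.0 * (var + 10.0))))
    k = int(np.argmax(scores))
    if k == len(contexts):                  # instantiate a new context (and a new policy head)
        contexts.append({"mean": feature.copy(), "count": 1})
    else:                                   # fold the feature into the matched context
        c = contexts[k]
        c["mean"] = (c["mean"] * c["count"] + feature) / (c["count"] + 1)
        c["count"] += 1
    return k
```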
In this work, we propose a novel method for the detailed reconstruction of transparent objects by exploiting polarization cues. Most existing methods usually lack sufficient constraints and suffer from over-smoothing. We therefore introduce polarization information as a complementary cue. The object's geometry is represented implicitly as a neural network, while polarization rendering is able to render the object's polarization images from a given shape and illumination configuration. Directly comparing the rendered polarization images against real-world captures would introduce additional errors caused by transmission through the transparent object. To address this issue, the concept of reflection percentage, which represents the proportion of the reflection component, is introduced. The reflection percentage is computed by a ray tracer and is then used to weight the polarization loss. We build a polarization dataset for multi-view transparent shape reconstruction to validate our method. Experimental results show that our method is able to recover detailed shapes and improve the reconstruction quality of transparent objects. Our dataset and code will be publicly available at https://github.com/shaomq2187/transpir.
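A minimal sketch of how a reflection-percentage map could weight the polarization loss, assuming per-pixel tensors; the rendering itself and the ray-traced weights are outside the scope of this snippet, and all shapes and names are illustrative.

```python
import torch

def weighted_polarization_loss(rendered: torch.Tensor,
                               captured: torch.Tensor,
                               reflection_percentage: torch.Tensor) -> torch.Tensor:
    """Per-pixel polarization loss down-weighted where transmission dominates.
    `reflection_percentage` in [0, 1] plays the role of the ray-traced weight
    described in the abstract; all tensors are (B, H, W) here for simplicity."""
    per_pixel = (rendered - captured).abs()
    return (reflection_percentage * per_pixel).mean()

# Hypothetical usage with random stand-in data:
rendered = torch.rand(1, 128, 128)
captured = torch.rand(1, 128, 128)
refl = torch.rand(1, 128, 128)       # would come from the ray tracer in practice
loss = weighted_polarization_loss(rendered, captured, refl)
```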